Track 3 – Wireless Networks

Session T3-S10

Mobile Edge Computing 1

Conference: 11:00 AM — 12:30 PM KST
Local: May 26 Tue, 7:00 PM — 8:30 PM PDT

Joint Computation Offloading, SFC Placement, and Resource Allocation for Multi-Site MEC Systems

Phuong-Duy Nguyen (INRS - University of Quebec, Canada); Long Bao Le (INRS, University of Quebec, Canada)

Network Function Virtualization (NFV) and Mobile Edge Computing (MEC) are promising 5G technologies to support resource-demanding mobile applications. In NFV, one must process the service function chain (SFC), in which a set of network functions must be executed in a specific order. Moreover, the MEC technology enables computation offloading of service requests from mobile users to remote servers to potentially reduce energy consumption and processing delay for the mobile application. This paper considers the joint optimization of computation offloading, resource allocation, and SFC placement in a multi-site MEC system. Our design objective is to minimize the weighted normalized energy consumption and computing cost subject to a maximum tolerable delay constraint. To solve the underlying mixed-integer non-linear optimization problem, we employ a decomposition approach in which we iteratively optimize the computation offloading, SFC placement, and computing resource allocation to obtain an efficient solution. Numerical results show the impacts of different parameters on the system performance and the superior performance of the proposed algorithm compared to benchmark algorithms.

Task Offloading for End-Edge-Cloud Orchestrated Computing in Mobile Networks

Chuan Sun, Hui Li, Xiuhua Li, Wen Junhao and Qingyu Xiong (Chongqing University, China); Xiaofei Wang (Tianjin University, China); Victor C.M. Leung (University of British Columbia, Canada)

Recently, mobile edge computing has received widespread attention, as it provides computing infrastructure by pushing cloud computing, network control, and storage to the network edges. To improve resource utilization and Quality of Service, we investigate the issue of task offloading for End-Edge-Cloud orchestrated computing in mobile networks. In particular, we jointly optimize server selection and resource allocation to minimize the weighted sum of the average cost. A cost minimization problem is formulated under the joint constraints of the cache resources and the communication/computation resources of edge servers. The resultant problem is a Mixed-Integer Non-linear Programming problem, which is NP-hard. To tackle this problem, we decompose it into simpler subproblems for server selection and resource allocation, respectively. We propose a low-complexity hierarchical heuristic approach for server selection, and a Cauchy-Schwarz inequality based closed-form approach to efficiently determine the resource allocation. Finally, simulation results demonstrate the superior performance of the proposed scheme in reducing the weighted sum of the average cost in the network.
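
The abstract does not state the closed form itself, so as a worked illustration of how a Cauchy-Schwarz argument can yield one, consider a generic allocation of a computation budget F across tasks with workloads c_i (the symbols here are assumptions for illustration, not the paper's notation):

```latex
% Illustrative only: generic cost/budget symbols, not the paper's model.
\min_{f_i>0}\ \sum_i \frac{c_i}{f_i}\quad \text{s.t.}\quad \sum_i f_i \le F .
% Cauchy--Schwarz gives
\Big(\sum_i \frac{c_i}{f_i}\Big)\Big(\sum_i f_i\Big)\ \ge\ \Big(\sum_i \sqrt{c_i}\Big)^{2},
% with equality iff f_i \propto \sqrt{c_i}, hence the closed-form allocation
f_i^{\star}=\frac{\sqrt{c_i}}{\sum_j \sqrt{c_j}}\,F .
```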

MeFILL: A Multi-edged Framework for Intelligent and Low Latency Mobile IoT Services

Ruichun Gu, Lei Yu and Junxing Zhang (Inner Mongolia University, China)

With the development of the cellular network in the last decade, the number of IoT devices has been growing exponentially and IoT applications are becoming more complex, with higher requirements on Key Performance Indicators (KPIs) such as latency, accuracy, and energy consumption. To address these challenges, the edge computing paradigm is often adopted to push computing capabilities to the edge servers nearest to end-users. However, the Quality of Experience (QoE) of IoT applications is still hard to guarantee, because the nearest edge server changes as users roam around. In this paper, we propose MeFILL, a Multi-edged Framework for Intelligent and Low Latency mobile IoT applications, which reduces latency and improves reliability through seamless handover of IoT devices between edge servers and leverages Distributed Deep Learning (DDL) collaboration among edge servers. Comparison experiments show that MeFILL can effectively optimize the performance KPIs of mobile IoT applications.

MEC-Enabled Wireless VR Video Service: A Learning-Based Mixed Strategy for Energy-Latency Tradeoff

Chong Zheng (School of Information Science and Engineering, Southeast University, China); Shengheng Liu (Southeast University, P.R. China); Yongming Huang and Luxi Yang (Southeast University, China)

Mobile edge computing (MEC) has received broad attention as an effective network architecture and a key enabler of the wireless virtual reality (VR) video service, which is expected to take a huge share of communication traffic. In this work, we investigate the scenario of multi-tile-based wireless VR video service with the aid of an MEC network, where the primary objective is to minimize the system energy consumption and the latency as well as to arrive at a tradeoff between these two metrics. To this end, we first cast the time-varying view popularity as a model-free Markov chain and use a long short-term memory autoencoder network to predict its dynamics. Then, a mixed strategy, which jointly considers dynamic caching replacement and deterministic offloading, is designed to fully utilize the caching and computing resources in the system. The underlying multi-objective optimization problem is reformulated as a partially observable Markov decision process and solved by using a deep deterministic policy gradient algorithm. The effectiveness of the proposed scheme is confirmed by numerical simulations.

Learning Based Fluctuation-aware Computation offloading for Vehicular Edge Computing System

Zhitong Liu, Xuefei Zhang, Jian Zhang, Dian Tang and Xiaofeng Tao (Beijing University of Posts and Telecommunications, China)

Vehicular edge computing (VEC) is a promising paradigm to satisfy the ever-growing computing demands by offloading computation tasks to vehicles equipped with computing servers. One of the major challenges in VEC systems is the highly dynamic and uncertain moving routes of vehicular servers. In order to address this challenge, a particular kind of vehicle (i.e., buses) is adopted as moving servers with pre-designated routes and timetables. On this basis, a fluctuation-aware learning-based computation offloading (FALCO) algorithm based on multi-armed bandit (MAB) theory is proposed. Specifically, base stations (BSs) are regarded as agents that learn the state of the moving servers so as to construct a stable observation set in the dynamic vehicular environment. In addition, the softmax function is applied to indicate the probability of each decision, which provides more flexible policies for obtaining better results. Simulation results demonstrate that our proposed FALCO algorithm can improve delay performance compared with other existing learning algorithms.
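
As a flavor of the softmax decision rule mentioned in the abstract, the sketch below shows standard Boltzmann action selection for a bandit agent; the function name and temperature parameter are illustrative assumptions, not taken from FALCO.

```python
import numpy as np

def softmax_action(est_rewards, tau=1.0):
    """Pick an arm (e.g., an offloading decision) with probability
    proportional to exp(estimated_reward / tau); smaller tau is greedier."""
    prefs = np.asarray(est_rewards, dtype=float) / tau
    prefs -= prefs.max()            # subtract max for numerical stability
    probs = np.exp(prefs)
    probs /= probs.sum()
    return np.random.choice(len(est_rewards), p=probs)

# Example: three candidate bus servers with estimated (negative-delay) rewards.
action = softmax_action([0.2, 0.5, 0.1], tau=0.3)
```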

Session Chair

Shengheng Liu (Southeast University & Purple Mountain Laboratories, China)

Session T3-S11

NOMA (Non-Orthogonal Multiple Access)

Conference: 11:00 AM — 12:30 PM KST
Local: May 26 Tue, 7:00 PM — 8:30 PM PDT

Online maneuver design for UAV-enabled NOMA systems via reinforcement learning

Yuwei Huang (University of Science and Technology of China, China); Xiaopeng Mo (GDUT, China); Jie Xu (The Chinese University of Hong Kong, Shenzhen, China); Ling Qiu (University of Science and Technology of China, China); Yong Zeng (Southeast University, China)

This paper considers an unmanned aerial vehicle (UAV)-enabled uplink non-orthogonal multiple-access (NOMA) system, where multiple users on the ground send independent messages to a UAV via NOMA transmission. We aim to design the UAV's dynamic maneuver in real time to maximize the sum-rate throughput of all ground users over a finite time horizon. Different from conventional offline designs considering static user locations under deterministic or stochastic channel models, we consider a more challenging scenario with mobile users and segmented channel models, where the UAV only causally knows the users' (moving) locations and channel state information (CSI). Under this setup, we first propose a new approach for UAV dynamic maneuver design based on reinforcement learning (RL) via Q-learning. Next, in order to further speed up the convergence and increase the throughput, we present an enhanced RL-based approach that additionally exploits expert knowledge of well-established wireless channel models to initialize the Q-table values. Numerical results show that our proposed RL-based and enhanced RL-based approaches significantly improve the sum-rate throughput, and the enhanced RL-based approach considerably speeds up the learning process owing to the proposed Q-table initialization.
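
For readers unfamiliar with the underlying machinery, a minimal tabular Q-learning sketch is given below; the state/action discretization and the idea of pre-filling the Q-table from a channel model stand in for the paper's "expert knowledge" initialization and are assumptions, not the authors' exact design.

```python
import numpy as np

def epsilon_greedy(Q, s, eps=0.1):
    """Explore with probability eps, otherwise pick the greedy maneuver."""
    if np.random.rand() < eps:
        return np.random.randint(Q.shape[1])
    return int(np.argmax(Q[s]))

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

# Q over quantized UAV positions (states) and maneuver directions (actions);
# instead of zeros, Q could be pre-filled with rates predicted by a path-loss
# model, which is the spirit of the enhanced RL approach described above.
Q = np.zeros((100, 5))
```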

NOMA based VR Video Transmission Exploiting User Behavioral Coherence

Ping Xiang, Hangguan Shan, Zhaoyang Zhang and Yu Lu (Zhejiang University, China); Tony Q. S. Quek (Singapore University of Technology and Design, Singapore)

In this work, we study the design of cooperative and non-cooperative transmission schemes for live VR video broadcast scenarios by utilizing non-orthogonal multiple access (NOMA), considering that users' viewports partly overlap due to behavioral coherence. To characterize the performance of the proposed cooperative and non-cooperative transmission schemes, the exact and asymptotic expressions of the outage probability, as well as the average outage capacity under imperfect successive interference cancellation (SIC), are derived, respectively. Based on the asymptotic outage probability results, we optimize the power allocation to maximize the average outage capacity of the proposed schemes. Finally, simulation results demonstrate that both of the proposed schemes can achieve a considerable performance gain over the traditional orthogonal multiple access (OMA) scheme in average outage capacity, and that each of the proposed schemes has its advantages and applicable scenarios.

Joint User Association and Resource Allocation for NOMA-Based MEC: A Matching-Coalition Approach

Guangyuan Zheng, Chen Xu and Liangrui Tang (North China Electric Power University, China)

Mobile edge computing (MEC) is regarded as a key technology to reduce the network pressure from computing-intensive and latency-sensitive applications in future wireless networks. Non-orthogonal multiple access (NOMA) can achieve high spectral efficiency by allowing multiple users to reuse the same resources. In this paper, we consider a novel NOMA-based MEC system to improve the energy efficiency of the task offloading process. With multiple access points (APs) deployed, the optimization problem is the joint user association and resource allocation, and the objective is to minimize the total energy consumption of all users subject to the task execution deadline. We formulate the problem as a many-to-one matching game with externality, due to the co-channel interference among users, and then propose a matching-coalition approach coupled with computing resource allocation and power control. Simulation results show that the proposed approach can efficiently reduce the total energy consumption in comparison to other simplified approaches.

User Scheduling and Energy Management with QoS Provisioning for NOMA-based M2M Communications

Chunhui Feng and Qinghai Yang (Xidian University, China); Meng Qin (School of Electronics and Computer Engineering, Peking University, China); Kyung Sup Kwak (Inha University, Korea (South))

Non-orthogonal multiple access (NOMA) is considered a potential technique to relieve the congestion caused by concurrent access from massive numbers of devices in machine-to-machine (M2M) communication systems. However, the co-channel interference caused by NOMA and the energy budget of machine-type devices (MTDs) become the bottleneck to further improving the system performance. Given the above issues, we formulate the joint user scheduling and energy management problem as a stochastic optimization problem. Specifically, the goal of the problem is to maximize the long-term average sum rate under the constraint of all MTDs' quality-of-service (QoS) requirements. For tractability, the stochastic problem is first transformed into two static subproblems based on Lyapunov optimization. Then, using the successive convex approximation (SCA) method, we design an effective algorithm to deal with the joint user scheduling and power allocation subproblems, which form a mixed integer and non-convex programming (MINCP) problem. Simulation results demonstrate that our proposed algorithm has good convergence performance and outperforms other schemes in terms of user satisfaction.

Joint Reflection Coefficient Selection and Subcarrier Allocation for Backscatter Systems with NOMA

Farhad Dashti Ardakani (The University of British Columbia, Canada); Vincent W.S. Wong (University of British Columbia, Canada)

Non-orthogonal multiple access (NOMA) and backscatter communication are two emerging technologies that enable low power communication for the Internet of Things (IoT) devices. In this paper, we consider a multicarrier NOMA (MC-NOMA) backscatter communication system. The objective is to maximize the aggregate data rate of the system by jointly optimizing the reflection coefficients and subcarrier allocation. The formulated problem is nonconvex and exhibits hidden monotonicity structure. To obtain the optimal solution, we propose an algorithm based on discrete monotonic optimization. The proposed algorithm can be considered as a performance benchmark. We also transform the nonconvex problem to another problem by using difference of convex functions and successive convex approximation and propose an algorithm to obtain a suboptimal solution in polynomial time. Simulation results show that the suboptimal scheme achieves an aggregate data rate close to the proposed optimal scheme. Results also show that our proposed schemes provide a higher aggregate data rate than the orthogonal multiple access (OMA) scheme.

Session Chair

Hangguan Shan (Zhejiang University, China)

Session T3-S12

Vehicular Network 1

Conference: 11:00 AM — 12:30 PM KST
Local: May 26 Tue, 7:00 PM — 8:30 PM PDT

C2RC: Channel Congestion-based Re-transmission Control for 3GPP-based V2X Technologies

Gaurang Naik and Jung-Min (Jerry) Park (Virginia Tech, USA); Jonathan Ashdown (United States Air Force, USA)

The 3rd Generation Partnership Project (3GPP) is actively designing New Radio Vehicle-to-Everything (NR V2X), a 5G NR-based technology for V2X communications. NR V2X, along with its predecessor Cellular V2X (C-V2X), is set to enable low-latency and high-reliability communications in high-speed and dense vehicular environments. A key reliability-enhancing mechanism that is available in C-V2X and is likely to be reused in NR V2X is packet re-transmission. In this paper, using a systematic and extensive simulation study, we investigate the impact of this feature on the system performance of C-V2X. We show that statically configuring vehicles to always disable or enable packet re-transmissions either fails to extract the full potential of this feature or leads to performance degradation due to increased channel congestion. Motivated by this, we propose and evaluate Channel Congestion-based Re-transmission Control (C2RC), which, based on the observed channel congestion, allows vehicles to autonomously decide whether or not to use packet re-transmissions, without any involvement of the cellular infrastructure. Using our proposed mechanism, C-V2X-capable vehicles can boost their performance in lightly-loaded environments while not compromising performance in denser conditions.

A User Association Policy for UAV-aided Time-varying Vehicular Networks with MEC

Bingqing Hang, Biling Zhang and Li Wang (Beijing University of Posts and Telecommunications, China); Jingjing Wang and Yong Ren (Tsinghua University, Beijing, China); Zhu Han (University of Houston, USA)

Multi-access edge computing (MEC) is viewed as a promising technology to improve real-time video services in vehicular networks. However, in traditional vehicular networks, the road side units (RSUs) are usually only equipped with communication modules, and unmanned aerial vehicles (UAVs) are seldom used. In this paper, a new UAV-aided time-varying vehicular network is introduced for vehicle users (VUEs) to obtain a better experience, where the RSUs and the UAV are equipped with MEC servers for real-time video transcoding. Considering that the video service always lasts for a period of time, we investigate the user association policy from a long-term perspective. Specifically, to characterize the time-varying features of communication links and the heterogeneity of available resources, we theoretically derive the achievable video chunks and link reliability based on the vehicle mobility model and content caching model. Then, the user association problem is formulated as a utility optimization problem, where both the VUE's quality of experience (QoE) and the handover cost are taken into consideration. Furthermore, we propose an improved Dijkstra algorithm to solve the original NP-hard problem after it is transformed into a shortest path selection problem. Finally, numerical results verify that the proposed scheme outperforms existing schemes in terms of the VUE's QoE and the number of handovers.

Delay Sensitive Large-scale Parked Vehicular Computing via Software Defined Blockchain

Yuanyuan Cao and Yinglei Teng (Beijing University of Posts and Telecommunications, China); F. Richard Yu (Carleton University, Canada); Victor C.M. Leung (University of British Columbia, Canada); Ziqi Song and Mei Song (Beijing University of Posts and Telecommunications, China)

To utilize the potential computing resources of parked vehicles (PVs) in large parking lots, we design a large-scale parked vehicular computing system via software defined blockchain. However, the parking time of PVs is uncertain, and some computational services have delay requirements. Therefore, in this paper, we propose a delay-sensitive framework that jointly optimizes the blockchain parameters (block size and block generation time) and resources, as well as the offloading strategy and computing frequency adjustment. Such a design causes the problem to be highly coupled and non-convex, for which we use an alternating optimization (AO) strategy and perform multiple transformations to ensure convexity. Finally, the simulation results show the effectiveness of the proposed scheme.

Resource Allocation for AoI-Constrained V2V Communication in Finite Blocklength Regime

Shuai Gao and Meixia Tao (Shanghai Jiao Tong University, China)

The freshness of information is an important indicator for critical message exchange in vehicle-to-vehicle (V2V) communications. In this paper, we study a resource allocation problem to minimize the long-term power consumption under age of information (AoI) constraints in the finite blocklength (FBL) regime. Due to the high reliability requirement, we consider the AoI violation probability, which consists of the decoding error probability and the queue length violation probability. To ensure a short tail of the AoI distribution, we impose statistical constraints on the queue length utilizing extreme value theory (EVT). Applying the Lyapunov optimization technique, the long-term problem is transformed into a drift-plus-penalty problem, which can be solved in each slot via a two-step method. In addition, in order to achieve the optimal power control and decoding error probability, we propose an efficient iterative algorithm and show the convexity of the optimization problem in each step. Simulation results show that our scheme achieves high reliability and a short AoI tail compared to the baseline in the FBL regime.
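
The drift-plus-penalty construction referenced here usually takes the following generic form; the symbols (queues Q_k, penalty p(t), control weight V) are the textbook ones and are assumed rather than copied from the paper.

```latex
% Generic Lyapunov drift-plus-penalty (textbook form, notation assumed):
L(t)=\tfrac{1}{2}\sum_{k}Q_k(t)^{2},\qquad
\Delta(t)=\mathbb{E}\{L(t+1)-L(t)\mid\boldsymbol{Q}(t)\},
% and in each slot one minimizes an upper bound on
\Delta(t)+V\,\mathbb{E}\{p(t)\mid\boldsymbol{Q}(t)\},
% where p(t) is the per-slot penalty (here, transmit power) and V>0 trades
% queue/AoI-constraint satisfaction against average power.
```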

A Spectrum Aware Mobility Pattern Based Routing Protocol for CR-VANETs

Sharmin Akter and Nafees Mansoor (University of Liberal Arts Bangladesh, Bangladesh)

Cognitive radio technology offers an important function in the efficient utilization of the radio spectrum. Besides, it is expected that CR-enabled vehicular ad-hoc networks (CR-VANETs) will enrich the communication performance of existing vehicular networks (VANETs). However, to ensure efficient performance in multi-hop communication, a routing protocol for CR-VANETs needs to consider the autonomous mobility of the vehicles as well as the stochastic availability of the channels. Hence, this paper introduces a spectrum-aware mobility pattern based reactive routing protocol for CR-VANETs. The proposed protocol accommodates the dynamic behavior and selects a stable transmission path from a source node to the destination. The protocol is outlined as a weighted graph problem, where the weight of an edge is measured based on a parameter termed the Next-hop Determination Factor (NHDF). The NHDF implicitly considers the mobility patterns of the nodes and channel availability to select the optimum path for communication. In the proposed routing protocol, the mobility pattern of a node is defined from the viewpoint of distance, speed, direction, and the node's reliability. Furthermore, the spectrum awareness in the proposed protocol is measured over the number of shared common channels and the channel quality. It is anticipated that the proposed protocol shows efficient routing performance by selecting stable and secure paths from source to destination. Simulation is carried out to assess the performance of the protocol, and it is observed that the proposed routing protocol outperforms existing ones.

Session Chair

Anis Zarrad (University of Birmingham, UAE)

Session T3-S13

Mesh, Relay, and Ad Hoc Networks

Conference: 2:00 PM — 3:30 PM KST
Local: May 26 Tue, 10:00 PM — 11:30 PM PDT

Multi-Channel Delay Sensitive Scheduling for Convergecast Network

Daoud Burghal (University of Southern California, USA); Kyeong Jin Kim (Mitsubishi Electric Research Laboratories (MERL), USA); Jianlin Guo (Mitsubishi Electronic Research Laboratories, USA); Philip Orlik (Mitsubishi Electric Research Laboratories, USA); Toshinori Hori (Mitsubishi Electric Corp., Japan); Takenori Sumi (Mitsubishi Electric Corporation, Japan); Yukimasa Nagai (Mitsubishi Electric Research Laboratories, USA)

Motivated by an increasing interest in wireless networking for mission-critical applications, and by the recent amendment of time slotted channel hopping to IEEE 802.15.4, multi-channel delay-sensitive scheduling is investigated in the many-to-one network, also known as the convergecast network. In such a network, each node has data to be transmitted to a gateway through multi-hop communications. As a realistic setting, the packet release time at each node is not assumed to be uniform. Under this assumption, the goal of this work is to design a scheduling scheme that minimizes the schedule length and the maximum end-to-end delay, in which the former is essential for repetitive data acquisition, whereas the latter improves the freshness of the acquired data. To achieve the scheduling goal, the problem is formulated as a multi-objective integer program. To obtain a feasible solution and gain insight into the problem, a lower bound on the schedule length is derived. Based on that, a new scheduling scheme is designed to minimize the two objectives simultaneously. Link level simulations verify the performance improvement of the proposed scheme over existing schemes.

Secure Routing Protocol in Wireless Ad Hoc Networks via Deep Learning

Feng Hu and Bing Chen (Nanjing University of Aeronautics and Astronautics, China); Dian Shi and Xinyue Zhang (University of Houston, USA); Haijun Zhang (University of Science and Technology Beijing, China); Miao Pan (University of Houston, USA)

Open wireless channels make a wireless ad hoc network vulnerable to various security attacks, so it is crucial to design a routing protocol that can defend against the attacks of malicious nodes. In this paper, we first measure the trust value calculated from node behavior over a period to judge whether the node is trusted, and then combine other QoS requirements as the routing metrics to design a secure routing approach. Moreover, we propose a deep learning-based model to learn the routing environment repeatedly from data sets of packet flows and corresponding optimal paths. Then, when a new packet flow is input, the model can directly output a link set that satisfies the node's QoS and trust requirements, and therefore the optimal path of the packet flow can be obtained. The extensive simulation results show that, compared with the traditional optimization-based method, our proposed deep learning-based approach can not only guarantee more than 90% accuracy but also significantly improve the computation time.

Multi-Layer Function Computation in Disorganized Wireless Networks

Fangzhou Wu (University of Science and Technology of China, China); Chen Li (University of Science And Technology of China, China); Guo Wei (University of Sci. & Tech. of China, China)

For future wireless networks, enormous numbers of interconnections are required, creating a disorganized topology and leading to a great challenge in data aggregation. Instead of collecting data individually, a more efficient technique, computation over multi-access channels (CoMAC), has emerged to compute functions by exploiting the signal-superposition property of wireless channels. However, the implementation of CoMAC in disorganized networks with multiple relays (hops) is still an open problem. In this paper, we combine CoMAC and orthogonal communication in the disorganized network to attain the computation of functions at the fusion center. First, to make the disorganized network more tractable, we reorganize the disorganized network into a hierarchical network with multiple layers that consists of subgroups and groups. In the hierarchical network, we propose multi-layer function computation where CoMAC is applied to each subgroup and orthogonal communication is adopted within each group. The general computation rate is derived and the performance is further improved through time allocation.

Cognitive two-way relaying with adaptive network coding

Szu-Liang Wang (Chinese Culture University, Taiwan & Quanzhou Institute of Equipment Manufacturing, Haixi Institutes, Chinese Academy of Sciences, China); Tsan-Ming Wu (Chung Yuan Christian University, Taiwan)

In this paper, the overlay cognitive two-way relaying is considered, and the adaptive network coding (AdNC) protocol is proposed for improving the outage performance. The AdNC protocol switches between the digital and analog network coding schemes according to the decoding probability. The outage probabilities of the primary and secondary users are derived over Nakagami-m frequency-selective fading channels. Monte Carlo simulations are provided for verifying the accuracy of the derivations. Simulation and analytical results demonstrate that the proposed AdNC protocol possesses advantages of the digital and analog network coding schemes for the considered system.

End-to-end Throughput Optimization in Multi-hop Wireless Networks with Cut-through Capability

Liu Shengbo and Liqun Fu (Xiamen University, China)

The in-band full-duplex (FD) technique can efficiently improve the end-to-end throughput of a multi-hop network by enabling multi-hop FD amplify-and-forward relaying (cut-through) transmission. This paper investigates the optimal hop size of a cut-through transmission and spatial reuse to achieve the maximum achievable end-to-end throughput of a multi-hop network. In particular, we consider spatial reuse and establish an interference model for a string-topology multi-hop network with χ-hop cut-through transmission, and show that the maximum achievable end-to-end throughput is a function of χ and the spatial separation between two concurrently active cut-through transmissions. Through extensive numerical studies, we show that the achievable data rate of a cut-through transmission decreases drastically as the hop size χ increases. Furthermore, we find that the 2-hop cut-through transmission mode can always achieve the maximum end-to-end throughput using the Shannon capacity formula if the spatial reuse is properly addressed. On the other hand, the results show that the 5-hop cut-through transmission mode can obtain the maximum end-to-end throughput with discrete channel rates when the self-interference cancellation is perfect and the hop distance is small.

SourceShift: Resilient Routing in Highly Dynamic Wireless Mesh Networks

Andreas Ingo Grohmann (TU Dresden, Germany); Frank Gabriel and Sandra Zimmermann (Technische Universität Dresden, Germany); Frank H.P. Fitzek (Technische Universität Dresden & ComNets - Communication Networks Group, Germany)

Wireless networks have to support an increasing number of devices with increasing demands on mobility and resilience. Mesh network routing protocols provide an elegant solution to the problem of connecting mobile nodes, due to their ability to adapt to topology changes. However, with an increasing number of nodes and increasing node mobility, maintaining sufficiently recent routing information becomes increasingly challenging. Existing routing protocols fail to operate reliably in the case of sudden link or node failures. In this work, we propose a new routing approach called SourceShift to resiliently handle dynamic networks in the absence of current network status information. SourceShift uses opportunistic routing and network coding, like MORE, but also makes use of link-local feedback, like ExOR. We evaluate SourceShift in random network topologies with link and node failures and compare the results with the state of the art. The evaluation shows that SourceShift can ensure the delivery of the message when feasible. Additionally, the use of local feedback can improve the airtime efficiency compared to other routing protocols, even in cases without link or node failures. As a result, SourceShift requires less than half the airtime of state-of-the-art routing protocols in more than 60% of the evaluated cases.

Session Chair

Wei Liu (Chongqing University of Technology, P.R. China)

Session T3-S14

Measurement and Analytics 1

Conference: 2:00 PM — 3:30 PM KST
Local: May 26 Tue, 10:00 PM — 11:30 PM PDT

Real Entropy Can Also Predict Daily Voice Traffic for Wireless Network Users

Sihai Zhang, Junyao Guo, Tian Lan, Rui Sun and Jinkang Zhu (University of Science and Technology of China, China)

Voice traffic prediction is significant for network deployment optimization and thus improves network efficiency. The real-entropy-based theoretical bound and the corresponding prediction models have demonstrated their success in mobility prediction. In this paper, the real-entropy-based predictability analysis and prediction models are introduced into voice traffic prediction. For this adoption, a traffic quantification method is proposed and discussed. Based on real-world voice traffic data, the prediction accuracies of N-order Markov models, a diffusion-based model, and the MF model are presented, among which the 25-order Markov model performs best and approaches the maximum predictability. This work demonstrates that the real entropy can also predict voice traffic well, which broadens the understanding of real-entropy-based prediction theory.
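
As a pointer to what "real entropy" refers to, a short sketch of the widely used Lempel-Ziv-type estimator is given below (the form popularized in predictability studies); the quantization of voice traffic into symbols is the paper's contribution and is not reproduced here, so the example symbols are illustrative.

```python
import math

def real_entropy(seq):
    """Lempel-Ziv-type estimate of the actual entropy rate of a symbol
    sequence: n*log2(n) / sum(Lambda_i), where Lambda_i is the length of
    the shortest substring starting at i that does not appear in seq[:i]."""
    n = len(seq)
    lambdas = []
    for i in range(n):
        k = 1
        while i + k <= n and _occurs(seq[i:i + k], seq[:i]):
            k += 1
        lambdas.append(k)
    return n * math.log2(n) / sum(lambdas)

def _occurs(pattern, history):
    """True if 'pattern' appears as a contiguous block inside 'history'."""
    m = len(pattern)
    return any(history[j:j + m] == pattern for j in range(len(history) - m + 1))

# Example with quantized hourly traffic levels (symbols are illustrative).
print(real_entropy([0, 1, 2, 1, 0, 1, 2, 1, 0, 1]))
```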

Identifying Cell Sector Clusters Using Massive Mobile Usage Records

Zhe Chen and Emin Aksehirli (DataSpark Pte Ltd, Singapore)

Optimizing capital expenditure (CapEx) has been an increasingly important objective in telco operators' cell planning process. Traditionally, neighbor cell relation is operationally managed and independent from capacity planning. In this paper, we present SCUT, an algorithm that uses massive mobile usage records to detect clusters of possible capacity-sharing sectors, such that capacity planning can be optimized based on coverage. SCUT analyzes shared usage to build a graph-based model of an operator's network and identifies its disjoint dense components as best-fit abstractions of clusters. Through analysis and benchmarking on real data, we demonstrate its scalability and potential to improve industry-standard site-based planning. SCUT has been deployed for a telco operator in Southeast Asia.

SEdroid: A Robust Android Malware Detector using Selective Ensemble Learning

Ji Wang, Qi Jing, Jianbo Gao and Xuanwei Qiu (Peking University, China)

Owing to the dramatic increase of Android malware and the low efficiency of the manual checking process, deep learning methods have in recent years become an auxiliary means for Android malware detection. However, these models are highly dependent on the quality of datasets, and produce unsatisfactory results when the quality of the training data is not good enough. In the real world, the quality of datasets without manual checking cannot be guaranteed; even Google Play may contain malicious applications, which will cause the trained model to fail. To address this challenge, we propose a robust Android malware detection approach based on selective ensemble learning, trying to provide an effective solution that is not so limited by the quality of datasets. The proposed model utilizes a genetic algorithm to help find the best combination of the component learners and improve the robustness of the model. Our results show that the proposed approach achieves more robust performance than other approaches in the same area.

Fine-grained Analysis and Optimization of Flexible Spatial Difference in User-centric Network

Danyang Wu (Beijing University of Posts and Telecommunications, China); Hongtao Zhang (Beijing University of Posts and Telecommunications & Key Lab of Universal Wireless Communications, Ministry of Education, China)

In user-centric networks, the traditional typical-user analysis method based on spatially averaged results is no longer applicable due to the flexible spatial difference, i.e., the large fluctuations in user performance with spatial location. In particular, because power control leads to keen spatial competition, the spatial difference becomes much more significant, so a fine-grained analysis method is needed to evaluate its performance. This paper analyzes the spatial difference in a user-centric network with power control through the meta distribution, from many different fine-grained perspectives, to reveal that power control improves the performance not only in the sense of the spatial average, but also in the complete spatial distribution. Specifically, the complementary cumulative distribution function (CCDF) of the conditional transmission success probability, the mean local delay, and the 5%-tile user performance are given to depict the effect of power control on individual links. This analysis provides the optimal values of the area and intensity for power control deployment in a user-centric network. Numerical results show that after applying power control, users with high coverage probability can be improved by up to 38%, the mean local delay decreases by 2x, and 4x gains can be obtained for the 5%-tile user performance.

Capacity Analysis of Distributed Computing Systems with Multiple Resource Types

Pengchao Han (Northeastern University, China); Shiqiang Wang (IBM T. J. Watson Research Center, USA); Kin K. Leung (Imperial College, United Kingdom (Great Britain))

In cloud and edge computing systems, computation, communication, and memory resources are distributed across different physical machines and can be used to execute computational tasks requested by different users. It is challenging to characterize the capacity of such a distributed system, because there exist multiple types of resources and the amount of resources required by different tasks is random. In this paper, we define the capacity as the number of tasks that the system can support with a given overload/outage probability. We derive theoretical formulas for the capacity of distributed systems with multiple resource types, where we consider the power of d choices as the task scheduling strategy in the analysis. Our analytical results describe the capacity of distributed computing systems, which can be used for planning purposes or assisting the scheduling and admission decisions of tasks to various resources in the system. Simulation results using both synthetic and real-world data are also presented to validate the capacity bounds.
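
The "power of d choices" strategy assumed in the analysis is easy to state concretely; the sketch below is a generic single-resource version with illustrative names, whereas the paper handles multiple resource types and random demands.

```python
import random

def power_of_d_choices(loads, demand, d=2):
    """Sample d machines uniformly at random and place the task on the
    least-loaded one; returns the chosen machine index."""
    candidates = random.sample(range(len(loads)), d)
    target = min(candidates, key=lambda m: loads[m])
    loads[target] += demand
    return target

# Example: 8 machines, schedule 100 unit-demand tasks.
loads = [0.0] * 8
for _ in range(100):
    power_of_d_choices(loads, demand=1.0, d=2)
print(loads)
```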

Session Chair

Sihai Zhang (University of Science and Technology of China, P.R. China)

Session T3-S15

Services and Applications

Conference: 2:00 PM — 3:30 PM KST
Local: May 26 Tue, 10:00 PM — 11:30 PM PDT

Deep Adaptation Networks Based Gesture Recognition using Commodity WiFi

Zijun Han and Lingchao Guo (Beijing University of Posts and Telecommunications, China); Zhaoming Lu (BUPT, China); Xiangming Wen (Beijing University of Posts and Telecommunications, China); Wei Zheng (BUPT, China)

Device-free gesture recognition plays a crucial role in smart home applications, setting humans free from wearable devices and causing no privacy concerns. Prior WiFi-based recognition systems have achieved high accuracy in a static environment, but with limitations in adapting to changes in environments and locations. In this paper, we propose a fine-grained deep adaptation networks based gesture recognition scheme (DANGR) using the Channel State Information (CSI). DANGR applies wavelet transformation for amplitude denoising, and conjugate calibration to remove CSI time-variant random phase offsets. A Generative Adversarial Networks (GAN) based data augmentation approach is proposed to reduce the large cost of data collection and the over-fitting risks caused by an incomplete dataset. The distribution of CSI in various environments may be biased. In order to shrink these domain discrepancies across environments, we adopt domain adaptation based on a multi-kernel Maximum Mean Discrepancy scheme, which matches the mean embeddings of abstract representations across domains in a reproducing kernel Hilbert space. Extensive empirical evidence shows that DANGR yields a mean gesture recognition accuracy of 94.5% under environmental variations, providing a promising scheme for practical and long-run implementation.
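
The multi-kernel MMD criterion used as the adaptation loss is, in its standard form (notation assumed; the paper's exact kernel weighting is not given in the abstract):

```latex
% Standard (MK-)MMD between source distribution P and target distribution Q:
\mathrm{MMD}^{2}(P,Q)=\mathbb{E}_{x,x'\sim P}\big[k(x,x')\big]
-2\,\mathbb{E}_{x\sim P,\;y\sim Q}\big[k(x,y)\big]
+\mathbb{E}_{y,y'\sim Q}\big[k(y,y')\big],
% with the multi-kernel variant taking k=\sum_{u}\beta_{u}k_{u},\ \beta_{u}\ge 0,
% over a family of base kernels; minimizing it matches the mean embeddings of
% the two domains in the induced RKHS.
```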

Non-intrusive leak monitoring system for pipeline within a closed space by wireless sensor network

Fang Wang and Weiguo Lin (Beijing University of Chemical Technology, China); Zheng Liu (University of British Columbia Okanagan, Canada); Liang Kong and Xianbo Qiu (Beijing University of Chemical Technology, China)

Non-intrusive detection is critical to protecting the integrity of pipelines. Based on a wireless sensor network, a novel leak monitoring system, composed of a computer center, a coordinator, and wireless non-intrusive sensing nodes, is proposed in this paper for pipelines in closed spaces. A wireless non-intrusive sensing node that can be conveniently installed on and removed from the pipeline wall is designed. The proposed system achieves synchronous signal sampling across all wireless non-intrusive sensing nodes by having the coordinator wirelessly broadcast the time information from its GPS to them, which is essential to guarantee the accuracy of the leak location. Based on delay cross-correlation analysis, a leak location method is presented for multiple sensors. Experimental results demonstrate that the proposed system can accurately detect and locate pipeline leaks.
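
For context, the classical two-sensor cross-correlation localization formula behind such methods is the following; the paper's multi-sensor generalization is not detailed in the abstract, so the symbols are assumed.

```latex
% Two sensors at the pipe ends, spacing L, acoustic propagation speed v.
% \Delta t = t_2 - t_1 is the arrival-time difference estimated from the
% peak of the cross-correlation of the two leak signals; the leak lies at
d_1=\frac{L-v\,\Delta t}{2}
% measured from sensor 1, which is why accurate time synchronization of the
% nodes (here via the GPS-based coordinator broadcast) is critical.
```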

Smart Shopping Carts Based on Mobile Computing and Deep Learning Cloud Services

Muhmmad Atif Sarwar (National Chiao Tung University, Taiwan); Yousef-Awwad Daraghmi (Palestine Technical University Kadoorie, Palestine); Kuan-Wen Liu, Hong-Chuan Chi, Tsì-Uí İk and Yih-Lang Li (National Chiao Tung University, Taiwan)

Self-checkout systems enable retailers to reduce costs and customers to process their purchases quickly without waiting in queues. However, existing self-checkout systems suffer from design problems, as they require large hardware consisting of a camera, sensors, RFID, and other IoT technologies, which increases the cost of such systems. Therefore, we propose a smart shopping cart with self-checkout, called iCart, to improve the customer experience at retail stores by enabling just-walk-out checkout and to overcome the aforementioned problems. iCart is based on mobile cloud computing and deep learning cloud services. In iCart, a checkout event video is captured and sent to the cloud server for classification and segmentation, where an item is identified and added to the shopping list. The Linux-based cloud server hosts the YOLOv2 deep learning network. iCart is a lightweight, low-cost solution suitable for small-scale retail stores. The system is evaluated using real-world checkout videos, and the accuracy of shopping event detection and item recognition is about 97%. An iCart demo can be found at http://nol.cs.nctu.edu.tw/iCart/index.html.

CRED: Credibility-Enabled Social Network Based Q&A System for Assessing Answers Correctness

Imad Ali (Academia Sinica and National Tsing Hua University, Taiwan); Ronald Y. Chang (Academia Sinica, Taiwan); Cheng-Hsin Hsu (National Tsing Hua University, Taiwan)

In a question & answer (Q&A) system, credible users provide answers of higher correctness. However, in a distributed social network based Q&A (SNQ&A) system, an asker does not know a k-hop answerer's credibility, thus making it difficult for the asker to assess the answer correctness. Therefore, a credibility-enabled distributed SNQ&A system is crucial for determining the correctness of the answers. To this end, we propose CRED, a credibility-enabled distributed SNQ&A system, which facilitates each user to assess the correctness of the provided answers. CRED utilizes subjective logic to build interest-wise friend-to-friend credibility opinions under uncertainty. The developed opinions are then accumulated by CRED to get each user's aggregated credibility opinion, which may reflect the user's real credibility. CRED forwards a question to the users with the highest credibility beliefs in the question's interest category. Our evaluation results show that, on average, CRED achieves a higher success ratio, higher answer correctness, and lower answer uncertainty by 12.1%, 16.4%, and 22.2%, respectively, as compared to the best-performing baseline systems.
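
As background on the subjective-logic machinery referenced above (this is the standard opinion representation, not CRED's specific fusion operators):

```latex
% A subjective-logic opinion held by one user about another, for a given
% interest category (standard form):
\omega_x=(b_x,\,d_x,\,u_x,\,a_x),\qquad b_x+d_x+u_x=1,
% where b, d, u are belief, disbelief and uncertainty masses and a is the
% base rate; the projected (expected) credibility is
\mathrm{P}(x)=b_x+a_x\,u_x .
```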

Maximizing Clearance Rate by Penalizing Redundant Task Assignment in Mobile Crowdsensing Auctions

Maggie E. Gendy and Ehab F. Badran (Arab Academy for Science, Technology and Maritime Transport, Egypt); Ahmad Al-Kabbany (Arab Academy for Science and Technology, Egypt)

This research is concerned with the effectiveness of auction-based task assignment and management in centralized, participatory Mobile Crowdsensing (MCS) systems. During auctions, sensing tasks are matched with participants based on bids and incentives that are provided by the participants and the platform, respectively. Recent literature has addressed several challenges in auctions, including untruthful bidding and malicious participants. Our recent work started addressing another challenge, namely, the maximization of the clearance rate (CR) of sensing campaigns, i.e., the percentage of accomplished sensing tasks. In this research, we propose a new objective function for matching tasks with participants, in order to achieve CR-maximized, reputation-aware auctions. In particular, we penalize redundant task assignment, where a task is assigned to multiple participants, which can consume the budget unnecessarily. We observe that the fewer the bidders on a certain task, the higher the priority it should be assigned in order to get accomplished. Hence, we introduce a new factor, the task redundancy factor, in managing auctions. Through extensive simulations under varying sensing-campaign conditions, and given a fixed budget, we show that penalizing redundancy by giving higher priority to unpopular tasks yields significant CR increases of approximately 50% compared to the highest clearance rates in the recent literature.

Session Chair

Ronald Y. Chang (Academia Sinica, Taiwan)

Session T3-S16

Mobile Edge Computing 2

Conference: 4:00 PM — 5:30 PM KST
Local: May 27 Wed, 12:00 AM — 1:30 AM PDT

Resource Allocation for Multi-access Edge Computing with Coordinated Multi-Point Reception

Jian-Jyun Hung and Wanjiun Liao (National Taiwan University, Taiwan); Yi-Han Chiang (Osaka Prefecture University, Japan)

Multi-access edge computing (MEC) has emerged as a promising platform to provide user equipments (UEs) with timely computational services through deployed edge servers. Typically, the size of the uplink task data (e.g., images or videos) required for processing is more pronounced than that of the downlink task result, and hence MEC offloading (MECO) plays a decisive role in the efficiency of MEC systems. In light of the unprecedented growth of UEs in next-generation mobile networks, the reception of uplink signals at base stations (BSs) can be corrupted due to potential inter-user interference. To address this issue, coordinated multi-point (CoMP) reception, which enables BSs to cooperatively receive uplink signals, has evolved as an effective approach to enhance the received signal qualities. In this paper, we investigate a resource allocation problem for MECO with CoMP reception and formulate it as a mixed-integer non-linear program (MINLP). To solve this problem, we leverage the concept of interference graphs to characterize uplink inter-user interference, based on which we propose a resource allocation algorithm that consists of three phases: 1) computing resource allocation, 2) subcarrier allocation and cell clustering, and 3) subcarrier reuse and cell re-clustering. The simulation results show that our proposed solution can effectively enhance the delay performance of MECO through CoMP reception as compared with existing solution approaches under various system settings.

Joint Offloading and Resource Allocation for Time-Sensitive Multi-Access Edge Computing Network

Jun-jie Yu, Mingxiong Zhao, Wen-tao Li and Di Liu (Yunnan University, China); Shao Wen Yao (National Pilot School of Software, Yunnan University, China); Wei Feng (Hangzhou Dianzi University, China)

In this paper, we investigate the offloading scheme and resource allocation strategy for an Orthogonal Frequency-Division Multiple Access (OFDMA) based multi-access edge computing (MEC) network to minimize the total system energy consumption. Partial data offloading is studied, where mobile data can be computed at both local devices and the edge cloud, with consideration of users' time-sensitive tasks. Owing to the NP-hardness of the considered optimization problem, we propose an iterative algorithm to decide the proportion of data to offload and design the resource allocation strategy sequentially. Simulation results show that the proposed algorithm achieves better performance than the reference schemes.

Computation Resource Allocation for Heterogeneous Time-Critical IoT Services in MEC

Jianhui Liu and Qi Zhang (Aarhus University, Denmark)

Mobile edge computing (MEC) is one of the promising solutions to process computation-intensive tasks within short latency for emerging Internet-of-Things (IoT) use cases, e.g., virtual reality (VR), augmented reality (AR), and autonomous vehicles. Due to the coexistence of heterogeneous services in an MEC system, the task arrival interval and the required execution time can vary between services. It is challenging to schedule computation resources for services with stochastic arrivals and runtimes at an edge server (ES). In this paper, we propose a flexible computation offloading framework among users and ESs. Based on the framework, we propose a Lyapunov-based algorithm to dynamically allocate computation resources for heterogeneous time-critical services at the ES. The proposed algorithm minimizes the average timeout probability without any prior knowledge of the task arrival process and the required runtime. The numerical results show that, compared with the standard queuing models used at the ES, the proposed algorithm achieves at least a 35% reduction of the timeout probability, and a computation-resource utilization efficiency that approximates that of a non-causal queuing model under various scenarios.

Location-Privacy-Aware Service Migration in Mobile Edge Computing

Weixu Wang, Shuxin Ge and Xiaobo Zhou (Tianjin University, China)

This talk does not have an abstract.

Adaptive Task Partitioning at Local Device or Remote Edge Server for Offloading in MEC

Jianhui Liu and Qi Zhang (Aarhus University, Denmark)

Mobile edge computing (MEC) is one of the promising solutions to process computation-intensive tasks for the emerging time-critical Internet-of-Things (IoT) use cases, e.g., virtual reality (VR), augmented reality (AR), and autonomous vehicles. The latency can be reduced further when a task is partitioned and computed through the collaboration of multiple edge servers (ESs). However, the state-of-the-art work studies MEC-enabled offloading based on a static framework, which partitions tasks at either the local user equipment (UE) or the primary ES. The dynamic selection between the two offloading schemes has not been well studied yet. In this paper, we investigate a dynamic offloading framework in a multi-user scenario. Each UE can decide who partitions a task according to the network status, e.g., channel quality and allocated computation resources. Based on the framework, we model the latency to complete a task and formulate an optimization problem to minimize the average latency among UEs. The problem is solved by jointly optimizing the task partitioning and the allocation of the communication and computation resources. The numerical results show that, compared with the static offloading schemes, the proposed algorithm achieves lower latency in all tested scenarios. Moreover, both mathematical derivation and simulation illustrate that the difference in wireless channel quality between a UE and different ESs can be used as an important criterion to determine the right scheme.

Session Chair

Qi Zhang (Aarhus University, Denmark)

Session T3-S17

Measurement and Analytics 2

Conference: 4:00 PM — 5:30 PM KST
Local: May 27 Wed, 12:00 AM — 1:30 AM PDT

Mutation Testing Framework for Ad-hoc Networks Protocols

Anis Zarrad (University of Birmingham, United Arab Emirates); Izzat Alsmadi (Texas A&M San Antonio, USA)

Computing networks integrate systems, services, and users around the world. Hundreds of protocols contribute to making this process flawless. A fault in protocol design or implementation can impact many users and interrupt their tasks' workflows. Testing network protocols, whether static or dynamic, can take several approaches. Driven by the expansion of applications in different domains, in this paper we evaluate fault-based testing techniques for testing network protocols. Our fault-based testing requirements are extracted from network protocols' specifications. Our main goal is to test whether fault-based testing techniques can find faults or bugs that cannot be discovered by classical network protocol testing techniques. One of the significant functional testing areas in which fault-based techniques can work well is conformance testing. They can test whether the network protocol is robust enough to validate test cases that conform with the protocol specification and, on the other hand, invalidate test cases that do not show such conformance. We show through several experiments that fault-based testing can prove conformance with less effort than other testing approaches. Generated test scenarios serve as input for the network simulator. The quality of the test scenarios is evaluated from three perspectives: (i) code coverage, (ii) mutation score, and (iii) testing effort. We implemented the testing framework in NS2; the experiments can be recreated using other simulation environments.

On the Performance of Multi-Gateway LoRaWAN Deployments: An Experimental Study

Konstantin Mikhaylov (University of Oulu & Solmu Technologies OY, Finland); Martin Stusek (Brno University of Technology, Czech Republic); Pavel Masek (Brno University of Technology & Member of WISLAB group, Czech Republic); Radek Fujdiak (Brno University of Technology, Czech Republic); Radek Možný (Brno Technical University, Czech Republic); Sergey Andreev (Tampere University, Finland); Jiri Hosek (Brno University of Technology, Czech Republic)

The remarkable progress in Low Power Wide Area Network (LPWAN) technologies over recent years opens new opportunities for developing versatile massive Internet of Things (IoT) applications. In this paper, we focus on one of the most popular LPWAN technologies operating in the license-exempt frequency bands, LoRaWAN. The key contribution of this study is our unique set of results obtained during an extensive measurement campaign conducted in the city of Brno, Czech Republic. Over a three-month period, the connectivity of a public Long Range Wide Area Network (LoRaWAN) with more than 20 gateways (GWs) was assessed at 231 test locations. This paper presents an analysis of the obtained results, aimed at capturing the effects related to the spatial diversity of the GW locations and real-life multi-GW network operation with all its practical features. One of our findings is that only at 47% of the tested locations did the GW at the minimum geographical distance demonstrate the highest received signal strength and signal-to-noise ratio (SNR). Also, our results capture and characterize the variations in the received signal strength indicator (RSSI) and SNR as a function of the communication distance in an urban environment, and illustrate the distribution of the spreading factors (SFs) resulting from the adaptive data rate (ADR) algorithm operation in a real-life multi-GW deployment.

Big Data Enabled Mobility Robustness Optimization for Commercial LTE Networks

Jaiju Joseph (Aalto University & Elisa Corporation, Finland); Furqan Ahmed and Tommi Jokela (Elisa Corporation, Finland); Olav Tirkkonen (Aalto University, Finland); Juho Poutanen and Jarno Niemelä (Elisa Corporation, Finland)

Mobility Robustness Optimization (MRO) is widely considered an important self-organizing network (SON) use case for tackling mobility management problems in LTE/LTE-Advanced networks. In this paper, we propose a data-driven, centralized SON based MRO approach that relies on data from network configuration and performance management data sources to improve mobility performance in a fully automated manner. In particular, early and late handover statistics are used by the algorithm to make decisions regarding the modification of mobility parameters. Based on performance management data from a live network, it is first observed that intra-frequency handovers account for the majority of handover problems, and that problems are predominantly cell-pair specific, not cell-specific. To increase mobility robustness, the cell individual offset configuration parameter is adjusted accordingly. The algorithm is deployed in a cluster of cells in a commercial LTE network. Results show that the algorithm is able to reduce radio link failure rates by up to 40 percent within two weeks, which underscores the potential of the proposed approach for commercial LTE networks.
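
A hypothetical per-cell-pair adjustment rule in the spirit described above might look like the sketch below; the function name, step size, and offset limits are illustrative assumptions, not the deployed algorithm.

```python
def adjust_cio(cio_db, too_late, too_early, ping_pong,
               step_db=1.0, cio_min=-6.0, cio_max=6.0):
    """Illustrative MRO rule for one cell pair: too-late handovers suggest
    triggering the handover earlier (raise the cell individual offset toward
    the target cell), while too-early/ping-pong handovers suggest lowering it."""
    if too_late > too_early + ping_pong:
        cio_db += step_db
    elif too_early + ping_pong > too_late:
        cio_db -= step_db
    return max(cio_min, min(cio_max, cio_db))

# Example: a pair with mostly too-late handovers gets its offset increased.
print(adjust_cio(0.0, too_late=12, too_early=2, ping_pong=1))
```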

A No-Reference Video Streaming QoE Estimator based on Physical Layer 4G Radio Measurements

Diogo F.M. Moura (Instituto Superior Técnico, Portugal); Marco Sousa (Instituto de Telecomunicações and Celfinet, Portugal); Pedro Vieira (Instituto de Telecomunicações and ISEL, Portugal); António J. Rodrigues (IT / Instituto Superior Técnico, Portugal); Maria Paula Queluz (Instituto Superior Técnico, Portugal)

With the increase in consumption of multimedia content through mobile devices (e.g., smartphones), it is crucial to find new ways of optimizing current and future wireless networks and to continuously give users a better Quality of Experience (QoE) when accessing that content. To achieve this goal, it is necessary to provide Mobile Network Operators (MNOs) with real-time QoE monitoring for multimedia services (e.g., video streaming, web browsing), enabling fast network optimization and effective resource management. This paper proposes a new QoE prediction model for video streaming services over 4G networks, using layer 1 (i.e., Physical Layer) key performance indicators (KPIs). The model estimates the service Mean Opinion Score (MOS) based on a Machine Learning (ML) algorithm, using real MNO drive test (DT) data where both application layer and layer 1 metrics are available. Of the several considered ML algorithms, Gradient Tree Boosting (GTB) showed the best performance, achieving a Pearson correlation of 78.9%, a Spearman correlation of 66.8%, and a Mean Squared Error (MSE) of 0.114 on a test set with 901 examples. Finally, the proposed model was tested with new DT data together with the network's configuration. With the use case results, the QoE predictions were analyzed according to the context in which the session was established, the radio transmission environment, and the radio channel quality indicators.

Monostatic Backscatter Communication in Urban Microcellular Environment Using Cellular Networks

Muhammad Usman Sheikh, Furqan Jameel, Huseyin Yigitler, Xiyu Wang and Riku Jäntti (Aalto University, Finland)

0
Backscatter communication offers a reliable and energy-efficient alternative to conventional radio systems. With the additional capability of wireless power transmission, backscatter tags can operate in a completely battery-less manner. Due to these attractive features, researchers from academia and industry are extensively investigating its utility as an enabler of massive Internet of Things (IoT) networks. However, before reaping the benefits of ambient backscatter communications, it is necessary to evaluate its feasibility and operability for large-scale networks. To that end, this paper provides an empirical study of a realistic city-wide deployment of monostatic wireless-powered backscatter tags in the Helsinki region. The coverage and outage performance is assessed for both indoor and outdoor conditions using sophisticated 3D ray-tracing simulations, where the indoor scenario consists of multi-story buildings with low- and high-loss materials. Moreover, an in-depth evaluation of energy shortage at the backscatter tags is also provided, which sheds light on the importance of wireless power transmission for such networks. The results provided here should be helpful in scaling up the practical deployment of backscatter tags.
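
For orientation, the sketch below works through a free-space monostatic backscatter link budget in which the forward link must also clear the tag's energy-harvesting sensitivity. It is an assumption-laden illustration, not the ray-tracing evaluation of the paper, and every parameter value is made up.

```python
# Minimal sketch (assumption): free-space monostatic backscatter link budget.
import math

def fspl_db(d_m, f_hz):
    c = 3e8
    return 20 * math.log10(4 * math.pi * d_m * f_hz / c)

def monostatic_link(d_m, f_hz=2.1e9, ptx_dbm=43.0, g_reader_dbi=15.0,
                    g_tag_dbi=2.0, backscatter_loss_db=6.0):
    pl = fspl_db(d_m, f_hz)
    p_at_tag = ptx_dbm + g_reader_dbi + g_tag_dbi - pl              # forward link
    p_at_reader = p_at_tag - backscatter_loss_db + g_tag_dbi + g_reader_dbi - pl
    return p_at_tag, p_at_reader

p_tag, p_rx = monostatic_link(100.0)
print(f"power at tag: {p_tag:.1f} dBm (harvesting typically needs e.g. > -20 dBm)")
print(f"backscattered power at reader: {p_rx:.1f} dBm")
```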

Session Chair

Konstantin Mikhaylov (University of Oulu, Finland)

Session T3-S18

Vehicular Network 2

Conference
4:00 PM — 5:30 PM KST
Local
May 27 Wed, 12:00 AM — 1:30 AM PDT

A Reinforcement Learning Approach for Efficient Opportunistic Vehicle-to-Cloud Data Transfer

Benjamin Sliwa and Christian Wietfeld (TU Dortmund University, Germany)

1
Vehicular crowdsensing is anticipated to become a key catalyst for data-driven optimization in the Intelligent Transportation System (ITS) domain. Yet, the expected growth in massive Machine-type Communication (mMTC) caused by vehicle-to-cloud transmissions will confront the cellular network infrastructure with great capacity-related challenges. A cognitive way of achieving relief without introducing additional physical infrastructure is opportunistic data transfer for delay-tolerant applications, in which clients schedule their data transmissions in a channel-aware manner to avoid retransmissions and interference with other cell users. In this paper, we introduce a novel approach to this type of resource-aware data transfer that brings together supervised learning for network quality prediction with reinforcement learning-based decision making. The performance evaluation is carried out using data-driven network simulation and real-world experiments in the public cellular networks of multiple Mobile Network Operators (MNOs) in different scenarios. The proposed transmission scheme significantly outperforms state-of-the-art probabilistic approaches in most scenarios and achieves data rate improvements of up to 181% in the uplink and up to 270% in the downlink compared to conventional periodic data transfer.
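
To make the split between prediction and decision making concrete, here is a minimal tabular Q-learning sketch in which a predicted data rate feeds a transmit-or-defer decision. The state, reward, and discretization are invented for illustration and are not the authors' design.

```python
# Minimal sketch (assumption, not the authors' scheme): a tabular Q-learning
# agent deciding per step whether to transmit buffered sensor data now or defer,
# based on a discretized predicted data rate.
import random
from collections import defaultdict

ACTIONS = ("defer", "transmit")
Q = defaultdict(float)          # Q[(state, action)]
alpha, gamma, eps = 0.1, 0.9, 0.1

def choose(state):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def reward(action, predicted_rate_mbps, buffer_mb, age_s):
    if action == "defer":
        return -0.01 * age_s                      # small penalty for data aging
    return predicted_rate_mbps - 0.5 * buffer_mb  # favour transmitting on good channels

# One illustrative step: state = (rate bucket, buffer bucket)
s = (20 // 5, 8 // 4)
a = choose(s)
r = reward(a, predicted_rate_mbps=20, buffer_mb=8, age_s=3)
update(s, a, r, next_state=(1, 2))
```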

Relay Selection and Coverage Analysis of Relay Assisted V2I Links in Microcellular Urban Networks

Blanca Ramos Elbal, Stefan Schwarz and Markus Rupp (TU Wien, Austria)

1
With the rising interest in vehicular communications, many road safety applications have been developed in recent years. Road safety applications demand low end-to-end latency, which can be supported by the large bandwidth available in the millimeter-wave (mm-wave) band. However, as the carrier frequency grows, wireless network coverage degrades dramatically. In this work, we focus on enhancing the vehicle-to-infrastructure (V2I) link through idle vehicular users: we enable idle users to act as relays and to boost the signal from the Base Station (BS) to users with poor-quality links, thereby enhancing the performance of the entire network. We analyze this approach in a 2-dimensional (2D) Manhattan grid where micro-cells and vehicular users are placed randomly. We consider a fraction of the users to be idle and, depending on the street, BS, and user densities, select the one that maximizes the coverage improvement. Using tools from stochastic geometry, we derive an analytical expression for the coverage probability of the direct link as well as the relay-assisted link, and compare the analytical results to Monte Carlo system-level simulations in order to validate our model.
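
As a point of reference for what a coverage-probability comparison looks like, the sketch below runs a Monte Carlo estimate for a direct link versus a decode-and-forward relay-assisted link under Rayleigh fading; the geometry, path-loss exponent, and thresholds are illustrative and not the Manhattan-grid model derived in the paper.

```python
# Minimal sketch (not the paper's stochastic-geometry derivation).
import numpy as np

rng = np.random.default_rng(1)
N, alpha, snr_tx_db, theta_db = 100_000, 3.5, 90.0, 5.0
theta = 10 ** (theta_db / 10)

def snr(d, n=N):
    fading = rng.exponential(1.0, n)                  # Rayleigh power fading
    return 10 ** (snr_tx_db / 10) * fading * d ** (-alpha)

d_bs_ue, d_bs_relay, d_relay_ue = 180.0, 100.0, 90.0
direct = snr(d_bs_ue) > theta
relayed = (snr(d_bs_relay) > theta) & (snr(d_relay_ue) > theta)
print("P(cov) direct        :", direct.mean())
print("P(cov) relay-assisted:", (direct | relayed).mean())
```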

Resource Scheduling for V2V Communications in Co-Operative Automated Driving

Prajwal Keshavamurthy (Universität Kassel, Germany); Emmanouil Pateromichelakis (Lenovo, Germany); Dirk Dahlhaus (University of Kassel, Germany); Chan Zhou (Huawei European Research Center, Germany)

0
Co-operative automated driving (CAD) use cases involve group-based vehicle-to-vehicle (V2V) communications with a wide range of quality-of-service (QoS) requirements. This work introduces and exploits fifth-generation (5G) functional architecture support for vehicle-to-everything (V2X) applications to address V2V sidelink radio resource management (RRM) for CAD use cases. A QoS-requirement-aware sidelink resource allocation optimization problem is formulated for multicast group V2V communications with reliability constraints and the half-duplex limitation. Furthermore, the problem is analyzed for cloud-based sidelink RRM in a dynamic vehicular environment. Accounting for the challenges of acquiring channel state information (CSI), a low-complexity scheduling scheme is presented that makes use of slowly varying large-scale channel parameters (e.g., path loss). Simulation results show significant gains in packet delay performance while meeting the reliability requirements on V2V links.
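
A minimal sketch of a scheduler in this spirit: links are assigned greedily to slots using only large-scale path loss, while respecting the half-duplex constraint (a vehicle cannot transmit and receive in the same slot). The data structures and reuse threshold are assumptions made for the illustration, not the paper's optimization.

```python
# Minimal sketch (assumption): greedy, path-loss-only slot assignment for V2V
# multicast links with a half-duplex check.
def schedule(links, pl_db, pl_reuse_db=110.0):
    """links: iterable of (tx, receivers); pl_db[(a, b)]: path loss in dB from a to b."""
    slots = []
    for tx, rxs in links:
        for slot in slots:
            # Half-duplex: a node may not transmit and receive in the same slot.
            hd_ok = all(tx not in o_rxs and o_tx not in rxs for o_tx, o_rxs in slot)
            # Spatial reuse: co-scheduled transmitters must see high path loss
            # towards each other's receivers.
            reuse_ok = all(pl_db[(o_tx, r)] >= pl_reuse_db
                           for o_tx, _ in slot for r in rxs) and \
                       all(pl_db[(tx, r)] >= pl_reuse_db
                           for _, o_rxs in slot for r in o_rxs)
            if hd_ok and reuse_ok:
                slot.append((tx, rxs))
                break
        else:
            slots.append([(tx, rxs)])
    return slots

# Tiny example with hypothetical path-loss values (dB):
pl = {("A", "D"): 120, ("C", "B"): 125}
print(schedule([("A", ["B"]), ("C", ["D"])], pl))   # both links share slot 0
```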

Optimal Receive Beamwidth for Time Varying Vehicular Channels

Yoonseong Kang and Hyowoon Seo (KAIST, Korea (South)); Wan Choi (Seoul National University & KAIST, Korea (South))

2
This paper studies a receive beamwidth control method for a vehicle-to-infrastructure (V2I) wireless communication system using the millimeter-wave (mm-wave) band. We use a triangular beam pattern to model and characterize the mm-wave receive beam pattern. First, the channel coherence time for line-of-sight (LoS) downlink transmission is derived for the given vehicular scenario. Then, we derive the attainable data rate for the time-varying vehicular channel, assuming that the beam is realigned whenever the channel coherence time elapses. In addition, we obtain the optimal receive beamwidth, which maximizes the derived attainable data rate. The effectiveness and feasibility of the proposed receive beamwidth control method are supported by both analytical and numerical simulation results. The results are also compared with a uniform linear array (ULA) beam pattern model and show that the triangular beam pattern model characterizes the practical antenna model well.
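
The trade-off behind the optimum can be illustrated numerically: a narrower beam gives more gain but stays aligned for less time, so realignment overhead grows. The sketch below maximizes an effective rate over the beamwidth; the functional forms and constants are invented for illustration and are not the paper's derivation.

```python
# Minimal sketch (assumption): numerically locating a beamwidth that balances
# beamforming gain against realignment overhead.
import numpy as np

B, snr0, v, d, t_align = 100e6, 0.05, 30.0, 50.0, 20e-3   # Hz, linear, m/s, m, s

def effective_rate(theta_rad):
    gain = 2 * np.pi / theta_rad                 # idealized: narrower beam, more gain
    t_coh = d * theta_rad / v                    # beam stays roughly aligned this long
    overhead = np.clip(t_align / t_coh, 0, 1)    # fraction of time lost to realignment
    return (1 - overhead) * B * np.log2(1 + snr0 * gain)

thetas = np.radians(np.linspace(1, 60, 600))
best = thetas[np.argmax([effective_rate(t) for t in thetas])]
print(f"best beamwidth ~ {np.degrees(best):.1f} degrees")
```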

Cluster-based Cooperative Multicast for Multimedia Data Dissemination in Vehicular Networks

Jianan Sun and Ping Dong (Beijing Jiaotong University, China); Xiaojiang Du (Temple University, USA); Tao Zheng and Yajuan Qin (Beijing Jiaotong University, China); Mohsen Guizani (Qatar University, Qatar)

0
With the development of communication technologies, vehicular network applications have evolved from basic traffic safety and efficiency applications to information and entertainment applications. The implementation of these emerging vehicular applications relies on the efficient dissemination of multimedia data. Given the dynamic topology changes, severe channel fading, and limited spectrum resources of vehicular networks, achieving efficient multimedia data dissemination in such a harsh environment is an urgent problem. Building on a hybrid cellular-D2D vehicular network, this paper proposes a cluster-based cooperative multicast scheme. The scheme combines multicast transmission with D2D-assisted relaying to provide high-quality data dissemination for vehicle users under limited spectrum resources. We present a communication quality index that considers multiple performance factors, formulate the relay selection problem as the anti p-center problem from graph theory, and then propose a heuristic method to solve it. The results show that the proposed scheme can effectively improve the utilization of wireless resources and the success rate of data dissemination.
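
A minimal sketch of per-cluster relay selection driven by a composite quality index; the factors, weights, and the greedy per-cluster choice are hypothetical stand-ins and do not reproduce the paper's anti p-center formulation or heuristic.

```python
# Minimal sketch (assumption): pick one D2D relay per cluster by ranking
# candidates with a weighted communication quality index.
def quality_index(c, w_snr=0.5, w_link=0.3, w_speed=0.2):
    """Higher is better: strong cellular SNR, good D2D links, low relative speed."""
    return (w_snr * c["snr_norm"]
            + w_link * c["d2d_link_norm"]
            - w_speed * c["rel_speed_norm"])

def select_relays(clusters):
    """clusters: {cluster_id: [candidate dicts]} -> {cluster_id: chosen vehicle id}."""
    return {cid: max(cands, key=quality_index)["id"] for cid, cands in clusters.items()}

clusters = {
    "c1": [{"id": "v3", "snr_norm": 0.9, "d2d_link_norm": 0.6, "rel_speed_norm": 0.2},
           {"id": "v7", "snr_norm": 0.7, "d2d_link_norm": 0.9, "rel_speed_norm": 0.1}],
}
print(select_relays(clusters))   # -> {'c1': 'v7'}
```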

Session Chair

Prajwal Keshavamurthy (Universität Kassel, Germany)

Session T3-S19

5G

Conference
4:00 PM — 5:30 PM KST
Local
May 27 Wed, 12:00 AM — 1:30 AM PDT

Research Project to Realize Various High-reliability Communications in Advanced 5G Network

Takahide Murakami, Hiroyuki Shinbo, Yu Tsukamoto, Shinobu Nanba and Yoji Kishi (KDDI Research, Inc., Japan); Morihiko Tamai (Advanced Telecommunications Research Institute International, Japan); Hiroyuki Yokoyama (ATR, Japan); Takanori Hara and Koji Ishibashi (The University of Electro-Communications, Japan); Kensuke Tsuda and Yoshimi Fujii (Kozo Keikaku Engineering Inc., Japan); Fumiyuki Adachi, Keisuke Kasai and Masataka Nakazawa (Tohoku University, Japan); Yuta Seki (Panasonic Corporation & Core Element Technology Development Center, Japan); Takayuki Sotoyama (Panasonic Mobile Communications Co., Ltd., Japan)

1
We have started a new research project for the "advanced 5G" era that aims to accommodate various types of communications, covering current and emerging services with different data flow-level quality requirements. In this paper, the objectives and technical aspects of the research project are introduced. We propose an architecture based on a virtualized radio access network (vRAN) that enables adaptive control of equipment resources and of the location of functions in the vRAN environment, in accordance with spatially and temporally changing communication demands. The seven planned research items essential for realizing the advanced 5G network are: blockage prediction, new radio access technologies (RATs) and their implementation with software-defined radio (SDR), adaptive interference and resource control, integration of radio and fiber resource control, highly efficient access transmission control, adaptive placement of BS functions, and quality-aware traffic pattern prediction.

Low Complexity Channel Model for Mobility Investigations in 5G Networks

Umur Karabulut (Nokia Bell Labs, Technical University of Dresden, Germany); Ahmad Awada (Nokia Bell Labs, Germany); Andre N Barreto (Barkhausen Institut gGmbH, Germany & Universidade de Brasilia, Brazil); Ingo Viering (Nomor Research GmbH, Germany); Gerhard P. Fettweis (Technische Universität Dresden, Germany)

3
Millimeter-wave communication has become an integral part of 5G networks to meet the ever-increasing demand for user data throughput. Employing higher carrier frequencies introduces new propagation challenges such as higher path loss and rapid signal degradation. On the other hand, higher frequencies allow the deployment of small antenna elements that enable beamforming. To investigate user mobility under these new propagation conditions, a proper model is needed that captures the spatial and temporal characteristics of the channel in beamformed networks. Current channel models developed for 5G networks are computationally inefficient and lead to infeasible simulation times for most user mobility simulations. In this paper, we present a simplified channel model that captures the spatial and temporal characteristics of the 5G propagation channel and runs in feasible simulation time. To this end, the coherence time and path diversity originating from the fully fledged geometry-based stochastic channel model (GSCM) are analyzed and incorporated into Jakes' channel model. Furthermore, the deviation of the multipath beamforming gain from the single-ray beamforming gain is analyzed, and a regression curve is obtained for use in system-level simulations. We show through simulations that the proposed simplified channel model yields mobility results comparable to those of Jakes' model for high path diversity. Moreover, the multipath beamforming gain increases the interference in the system and, in turn, the number of mobility failures.
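
For readers unfamiliar with Jakes-style fading, the sketch below generates sum-of-sinusoids Rayleigh fading samples for a given maximum Doppler shift, the kind of low-complexity building block the simplified model extends. The parameters are illustrative, and none of the GSCM-derived corrections from the paper are included.

```python
# Minimal sketch (not the authors' model): sum-of-sinusoids (Jakes-like)
# Rayleigh fading generator.
import numpy as np

def jakes_fading(f_d_hz, t_s, n_paths=32, seed=0):
    """Complex fading samples at times t_s for maximum Doppler f_d_hz."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n_paths)       # angles of arrival
    phi = rng.uniform(0, 2 * np.pi, n_paths)         # initial phases
    doppler = 2 * np.pi * f_d_hz * np.cos(theta)     # per-path Doppler
    t = np.asarray(t_s)[:, None]
    return np.exp(1j * (doppler * t + phi)).sum(axis=1) / np.sqrt(n_paths)

fd = 100.0                       # max Doppler, ~54 km/h at 2 GHz
t = np.arange(0, 0.05, 1e-4)     # 50 ms at 0.1 ms resolution
h = jakes_fading(fd, t)
print("mean power ~", np.mean(np.abs(h) ** 2))       # ~1 by construction
```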

Coexistence Management for URLLC in Campus Networks via Deep Reinforcement Learning

Behnam Khodapanah (TU Dresden, Germany); Tom Hößler (TU Dresden & Barkhausen Institut, Germany); Baris Alp Yuncu (TU Dresden, Germany); Andre N Barreto (Barkhausen Institut gGmbH, Germany & Universidade de Brasilia, Brazil); Meryem Simsek (Intel Labs & International Computer Science Institute, USA); Gerhard P. Fettweis (Technische Universität Dresden, Germany)

1
The increased usage of wireless technologies in unlicensed frequency bands inevitably increases co-channel interference. For applications such as ultra-reliable low-latency communications (URLLC) in factory automation, such interference must be avoided. An intelligent coexistence management entity, which dynamically distributes the time and frequency resources, has been shown to be greatly beneficial in boosting efficiency and avoiding crippling interruptions of the wireless medium. This entity also supports multi-connectivity schemes, which are crucial for industry-grade reliability requirements. The proposed coexistence management technique is based on deep reinforcement learning (DRL), a model-free framework in which channel allocation decisions are learned solely through interaction with the environment. Simulation results show that the proposed method greatly increases the reliability of the wireless network compared with legacy methods.
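
As a drastically simplified stand-in for the DRL agent, the sketch below uses an epsilon-greedy bandit that learns channel values from observed transmission successes. It illustrates learning a channel allocation from interaction only and is not the deep network, state space, or reward design of the paper.

```python
# Minimal sketch: epsilon-greedy channel selection from success/failure feedback
# (a bandit simplification of the DRL approach described above).
import random

class ChannelAgent:
    def __init__(self, n_channels, eps=0.1):
        self.eps = eps
        self.value = [0.0] * n_channels
        self.count = [0] * n_channels

    def select(self):
        if random.random() < self.eps:
            return random.randrange(len(self.value))
        return max(range(len(self.value)), key=lambda c: self.value[c])

    def update(self, channel, success):
        self.count[channel] += 1
        r = 1.0 if success else -1.0                     # reliability-driven reward
        self.value[channel] += (r - self.value[channel]) / self.count[channel]

agent = ChannelAgent(n_channels=4)
for _ in range(1000):
    ch = agent.select()
    interfered = (ch == 2) and random.random() < 0.6    # channel 2 is often busy
    agent.update(ch, success=not interfered)
print("learned channel values:", [round(v, 2) for v in agent.value])
```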

Modeling and Delay Analysis for SDN-Based 5G Edge Clouds

Ameen Chilwan (Norwegian University of Science and Technology, Norway); Yuming Jiang (Norwegian University of Science and Technology (NTNU), Norway)

0
The fifth generation (5G) mobile networks are envisioned to provide connectivity not only to mobile users but also to a wide range of services such as enhanced mobile broadband (eMBB) and massive Internet of Things (mIoT). To meet the diverse requirements of these services, Software Defined Networking (SDN) has been proposed as an enabling technology for both the core cloud and the edge cloud, in addition to network slicing to achieve isolation among services. In this paper, an analytical model is developed for such an SDN-based edge cloud, focusing on the support of two services: eMBB and mIoT. To illustrate the use of the model, a delay analysis of a switching node in the edge cloud is presented. The results show the relation between the packet delay and the underlying system parameters, such as slice density, as well as the impact of the SDN controller on the delay. An implication of the model, analysis, and results is that they may be used for network/resource planning and admission control in 5G edge clouds to meet the delay requirements of the services.
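
To show the flavor of such a delay analysis in the simplest possible setting, the sketch below computes the mean packet delay of a single switching node modeled as an M/M/1 queue serving aggregated eMBB and mIoT traffic; the paper's analytical model is more elaborate, and the rates used here are arbitrary.

```python
# Minimal sketch (assumption): M/M/1 mean delay at a switching node carrying
# aggregated eMBB and mIoT slice traffic.
def mm1_delay(arrival_rate, service_rate):
    """Mean sojourn time W = 1 / (mu - lambda), valid for lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable (rho >= 1)")
    return 1.0 / (service_rate - arrival_rate)

lam_embb, lam_miot, mu = 6000.0, 2000.0, 10000.0     # packets per second
print(f"mean delay ~ {1e3 * mm1_delay(lam_embb + lam_miot, mu):.2f} ms")
```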

Zero-touch coordination framework for Self-Organizing Functions in 5G

Diego Fernando Preciado Rojas and Faiaz Nazmetdinov (Technische Universität Ilmenau, Germany); Andreas Mitschele-Thiel (Ilmenau University of Technology, Germany)

0
Traditional mobile network services are built by chaining together multiple functional boxes, which makes the creation of new services rather static. With the advent of 5G, the ability to offer agile, on-demand services to users is mandatory; therefore, lifecycle operations such as initial service deployment, configuration changes, upgrades, scale-out, scale-in, optimization, and self-healing should be fully automated. Self-Organizing Network functions (SFs) were proposed to provide self-adaptation capabilities to mobile networks on different fronts (configuration, optimization, and healing) and to reduce error-prone human intervention. Nevertheless, the conventional design of these SFs was based on single-objective optimization, where each SF was considered a standalone agent pursuing one specific local objective (e.g., reducing interference or increasing coverage). Thus, the complex inter-dependencies between SFs were to some extent neglected, so conflicts are inevitable when more than one function acts on the network. A well-studied conflict occurs when the Mobility Load Balancing (MLB) and Mobility Robustness Optimization (MRO) functions are set up simultaneously: without coordination, performance degradation is expected because of the cross-dependencies between the two SFs. To cope with these underlying conflicts, we propose a zero-touch coordination framework based on Machine Learning (ML) that automatically learns the dynamics between the selected SFs and assists the network optimization task.

Session Chair

Ameen Chilwan (Norwegian University of Science and Technology, Norway)

